22 research outputs found

    Design of Finite-Length Irregular Protograph Codes with Low Error Floors over the Binary-Input AWGN Channel Using Cyclic Liftings

    Full text link
    We propose a technique to design finite-length irregular low-density parity-check (LDPC) codes over the binary-input additive white Gaussian noise (AWGN) channel with good performance in both the waterfall and the error floor regions. The design process starts from a protograph which embodies a desirable degree distribution. This protograph is then lifted cyclically to a certain block length of interest. The lift is designed carefully to satisfy a certain approximate cycle extrinsic message degree (ACE) spectrum. The target ACE spectrum is one with extremal properties, implying a good error floor performance for the designed code. The proposed construction results in quasi-cyclic codes, which are attractive in practice due to simple encoder and decoder implementation. Simulation results are provided to demonstrate the effectiveness of the proposed construction in comparison with similar existing constructions. Comment: Submitted to IEEE Trans. Communications
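
    As a rough illustration of the cyclic-lifting step (not the authors' ACE-spectrum design), the following sketch lifts a small hypothetical protomatrix into a quasi-cyclic parity-check matrix, replacing each nonzero protograph entry with a cyclically shifted identity block; the base matrix, shift values, and lifting degree are made-up examples.

    import numpy as np

    def circulant_permutation(shift, N):
        """N x N identity matrix cyclically shifted by `shift` positions."""
        return np.roll(np.eye(N, dtype=int), shift, axis=1)

    def lift_protograph(base, shifts, N):
        """Cyclically lift a binary protomatrix `base` with shift matrix `shifts`.

        Each 1-entry of `base` becomes an N x N circulant permutation and each
        0-entry an N x N all-zero block, yielding a quasi-cyclic parity-check
        matrix of size (m*N) x (n*N).
        """
        m, n = base.shape
        H = np.zeros((m * N, n * N), dtype=int)
        for i in range(m):
            for j in range(n):
                if base[i, j]:
                    H[i*N:(i+1)*N, j*N:(j+1)*N] = circulant_permutation(shifts[i, j], N)
        return H

    # Hypothetical 2 x 4 protograph, shift values, and lifting degree N = 8.
    base   = np.array([[1, 1, 1, 0],
                       [0, 1, 1, 1]])
    shifts = np.array([[0, 3, 5, 0],
                       [0, 1, 6, 2]])
    H = lift_protograph(base, shifts, N=8)
    print(H.shape)   # (16, 32): parity-check matrix of a length-32 quasi-cyclic code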

    Lowering the Error Floor of LDPC Codes Using Cyclic Liftings

    Full text link
    Cyclic liftings are proposed to lower the error floor of low-density parity-check (LDPC) codes. The liftings are designed to eliminate dominant trapping sets of the base code by removing the short cycles which form the trapping sets. We derive a necessary and sufficient condition for the cyclic permutations assigned to the edges of a cycle c of length ℓ(c) in the base graph such that the inverse image of c in the lifted graph consists of only cycles of length strictly larger than ℓ(c). The proposed method is universal in the sense that it can be applied to any LDPC code over any channel and for any iterative decoding algorithm. It also preserves important properties of the base code such as degree distributions, encoder and decoder structure, and in some cases, the code rate. The proposed method is applied to both structured and random codes over the binary symmetric channel (BSC). The error floor improves consistently as the lifting degree increases, and the results show significant improvements in the error floor compared to the base code, a random code of the same degree distribution and block length, and a random lifting of the same degree. Similar improvements are also observed when the codes designed for the BSC are applied to the additive white Gaussian noise (AWGN) channel.
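
    The abstract's necessary and sufficient condition is consistent with the standard cycle condition for cyclic lifts: a length-ℓ cycle in the base graph survives at length ℓ in the lifted graph exactly when the alternating sum of the edge shifts around it is 0 modulo the lifting degree. The sketch below computes the lengths of the lifted cycles under that well-known condition; the shift values and lifting degree are hypothetical, and this is not the paper's design procedure itself.

    from math import gcd

    def lifted_cycle_lengths(shifts, N):
        """Given circulant shifts s_1..s_L assigned (with alternating signs,
        following the check-node / variable-node traversal) to the edges of a
        length-L cycle in the base graph, return the cycle lengths in its
        inverse image under a cyclic lift of degree N.

        The inverse image splits into gcd(t, N) cycles of length L * N / gcd(t, N),
        where t is the alternating shift sum around the cycle; a length-L cycle
        survives iff t = 0 (mod N).
        """
        L = len(shifts)
        t = sum(s if k % 2 == 0 else -s for k, s in enumerate(shifts)) % N
        g = gcd(t, N)
        return [L * (N // g)] * g

    # Hypothetical length-6 cycle with shifts (2, 5, 1, 7, 4, 0) and N = 8:
    # the alternating sum is 3 (mod 8), so the six base edges lift to a single
    # cycle of length 48 and no length-6 cycle survives.
    print(lifted_cycle_lengths([2, 5, 1, 7, 4, 0], 8))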

    A CSI-based Human Activity Recognition using Canny Edge Detector

    Get PDF
    Human Activity Recognition (HAR) is one of the hot topics in the field of human-computer interaction. It has a wide variety of applications in tasks such as health rehabilitation, smart homes, smart grids, robotics, and human action prediction. HAR can be carried out through different approaches, such as vision-based, sensor-based, radar-based, and Wi-Fi-based methods. Due to the ubiquitous and easy-to-deploy nature of Wi-Fi devices, Wi-Fi-based HAR has gained the interest of both academia and industry in recent years. Wi-Fi-based HAR can be implemented using two channel metrics: Channel State Information (CSI) and the Received Signal Strength Indicator (RSSI). Recently, converting CSI data to images has led to increased accuracy in activity prediction. However, none of the previous research has focused on extracting features from the converted images using image-processing techniques. In this study, we investigate three available datasets gathered using CSI, and take advantage of Deep Learning (DL) with convolutional layers and an edge-detection technique to increase overall system accuracy. The Canny edge detector extracts the most important features of the image, and feeding its output to the DL model improves the prediction of activities. On the three datasets, we observed accuracy improvements of 5%, 27%, and 37%, respectively.
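
    A minimal sketch of the preprocessing idea, assuming OpenCV's Canny detector and a made-up CSI window size; the actual datasets, thresholds, and network architecture of the study are not reproduced here.

    import numpy as np
    import cv2  # OpenCV provides the Canny edge detector

    def csi_to_edge_image(csi_amplitude, low_thr=50, high_thr=150):
        """Convert a CSI amplitude matrix (time x subcarriers) into an 8-bit
        grayscale image and extract its edges with the Canny detector.

        Returns the normalized image and its edge map, which can be stacked
        as input channels of a convolutional network.
        """
        a = csi_amplitude.astype(np.float32)
        img = cv2.normalize(a, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        edges = cv2.Canny(img, low_thr, high_thr)
        return img, edges

    # Hypothetical CSI window: 200 time samples x 30 subcarriers of amplitudes.
    csi = np.abs(np.random.randn(200, 30)).astype(np.float32)
    img, edges = csi_to_edge_image(csi)
    x = np.stack([img, edges], axis=-1) / 255.0   # (200, 30, 2) CNN input
    print(x.shape)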

    Binary CEO Problem under Log-Loss with BSC Test-Channel Model

    Full text link
    In this paper, we propose an efficient coding scheme for the two-link binary Chief Executive Officer (CEO) problem under the logarithmic loss criterion. The exact rate-distortion bound for a two-link binary CEO problem under logarithmic loss has been obtained by Courtade and Weissman. We propose an encoding scheme based on compound LDGM-LDPC codes to achieve the theoretical bounds. In the proposed encoding, a binary quantizer using LDGM codes and syndrome coding employing LDPC codes are applied. An iterative joint decoding scheme is also designed as a fusion center. The proposed CEO decoder is based on the sum-product algorithm and a soft estimator. Comment: 5 pages. arXiv admin note: substantial text overlap with arXiv:1801.0043
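
    To illustrate the syndrome-coding ingredient, the toy sketch below computes a binary syndrome and recovers the quantized sequence from the syndrome plus side information by brute force; the parity-check matrix and sequences are hypothetical, and the exhaustive search merely stands in for the paper's sum-product fusion-center decoder over compound LDGM-LDPC codes.

    import numpy as np
    from itertools import product

    def syndrome(H, x):
        """Binary syndrome s = H x (mod 2)."""
        return H.dot(x) % 2

    def syndrome_decode(H, s, side_info):
        """Brute-force syndrome decoder: among all sequences with syndrome s,
        return the one closest (in Hamming distance) to the side information.
        """
        n = H.shape[1]
        best, best_d = None, n + 1
        for bits in product([0, 1], repeat=n):
            x = np.array(bits)
            if np.array_equal(syndrome(H, x), s):
                d = int(np.sum(x != side_info))
                if d < best_d:
                    best, best_d = x, d
        return best

    # Toy parity-check matrix of a length-6 code (hypothetical, not LDGM-LDPC).
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])
    u = np.array([1, 0, 1, 1, 0, 0])   # quantized observation at one link
    s = syndrome(H, u)                 # only the 3-bit syndrome is transmitted
    y = np.array([1, 0, 1, 0, 0, 0])   # side information at the fusion center
    print(syndrome_decode(H, s, y))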

    Constrained Secrecy Capacity of Finite-Input Intersymbol Interference Wiretap Channels

    Full text link
    We consider reliable and secure communication over intersymbol interference wiretap channels (ISI-WTCs). In particular, we first examine the setup where the source at the input of an ISI-WTC is unconstrained and then, based on a general achievability result for arbitrary wiretap channels, we derive an achievable secure rate for this ISI-WTC. Afterwards, we examine the setup where the source at the input of an ISI-WTC is constrained to be a finite-state machine source (FSMS) of a certain order and structure. Optimizing the parameters of this FSMS toward maximizing the secure rate is a computationally intractable problem in general, and so, toward finding a local maximum, we propose an iterative algorithm that at every iteration replaces the secure rate function by a suitable surrogate function whose maximum can be found efficiently. Although the secure rates achieved in the unconstrained setup are potentially larger than the secure rates achieved in the constrained setup, the latter setup has the advantage of leading to efficient algorithms for estimating achievable secure rates and also has the benefit of being the basis of efficient encoding and decoding schemes. Comment: 32 pages, 6 figures
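
    A generic sketch of the surrogate-replacement idea, using a toy concave objective in place of the secure-rate function; the function names, step size, and stopping rule are assumptions for illustration, not the paper's algorithm.

    import numpy as np

    def surrogate_maximize(f, surrogate_argmax, theta0, iters=50, tol=1e-9):
        """Generic iteration: at each step, replace the objective f by a
        surrogate that touches it at the current point and jump to the
        surrogate's maximizer, stopping when the objective stops improving."""
        theta = theta0
        for _ in range(iters):
            new_theta = surrogate_argmax(theta)
            if abs(f(new_theta) - f(theta)) < tol:
                theta = new_theta
                break
            theta = new_theta
        return theta

    # Toy concave objective over [0, 1] and a quadratic surrogate maximizer
    # (hypothetical stand-ins for the secure-rate function and FSMS parameter).
    f = lambda p: -(p - 0.3) ** 2
    def surrogate_argmax(p, step=0.5):
        grad = -2.0 * (p - 0.3)
        return float(np.clip(p + step * grad, 0.0, 1.0))

    print(surrogate_maximize(f, surrogate_argmax, theta0=0.9))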

    A rate-compatible puncturing scheme for finite-length LDPC Codes

    No full text
    In this paper, we propose a rate-compatible puncturing scheme for finite-length low-density parity-check (LDPC) codes over the additive white Gaussian noise (AWGN) channel. The proposed method is applicable to any LDPC mother code, both regular and irregular, and constructs punctured codes which perform well in both the waterfall and the error-floor regions for a wide range of code rates. The scheme selects code bits to be punctured one at a time, based on a sequence of criteria. An important selection criterion is the number of short cycles with low approximate cycle extrinsic message degree (ACE) in which a candidate bit node participates. Simulation results demonstrate that the ACE measure, which is most often the determining criterion in the final selection of the puncturing candidates, plays an important role in improving the performance of the codes in both the waterfall and the error-floor regions. These results also demonstrate that the proposed scheme is superior to existing puncturing methods, particularly when a wide range of code rates is desirable.
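
    A hedged sketch of one ingredient of such a scheme: greedily selecting puncturing candidates that touch the fewest short low-ACE cycles. The cycle list, ACE weighting, and tie-breaking below are illustrative assumptions; the paper uses a sequence of criteria, of which this is only one.

    def select_puncture_order(n_bits, cycles, num_punctures):
        """Greedy puncturing sketch: pick one bit at a time, preferring bits
        that participate in the fewest short cycles with low ACE.

        `cycles` is a list of (variable_node_set, ace_value) pairs for the
        short cycles of the Tanner graph (hypothetical input); lower-ACE
        cycles are weighted more heavily.
        """
        punctured = []
        remaining = set(range(n_bits))
        for _ in range(num_punctures):
            score = {v: sum(1.0 / (1 + ace) for nodes, ace in cycles if v in nodes)
                     for v in remaining}
            best = min(remaining, key=lambda v: (score[v], v))
            punctured.append(best)
            remaining.remove(best)
        return punctured

    # Toy example: 8 code bits, three short cycles with their ACE values.
    cycles = [({0, 2, 5}, 1), ({1, 2, 6}, 0), ({3, 5, 7}, 2)]
    print(select_puncture_order(8, cycles, num_punctures=3))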

    Joint Distributed Source-Channel Decoding for LDPC-Coded Binary Markov Sources

    Get PDF
    We propose a novel joint decoding technique for distributed source-channel (DSC) coded systems for the transmission of correlated binary Markov sources over additive white Gaussian noise (AWGN) channels. In the proposed scheme, relatively short-length low-density parity-check (LDPC) codes are independently used to encode the bit sequences of each source. To reconstruct the original bit sequences, a joint source-channel decoding (JSCD) technique is proposed which exploits knowledge of both the temporal and the source correlations. The JSCD technique is composed of two stages, which are performed iteratively. First, a sum-product (SP) decoder is serially concatenated with a BCJR decoder, where the knowledge of source memory is utilized during local (horizontal) iterations. Then, the estimate of the correlation between the sources is used to update the concatenated decoder during global (vertical) iterations, so the correlation of the sources serves as side information in the subsequent global iteration of each concatenated decoder. The simulation results for frame/bit error rate (FER/BER) show that significant gains are achieved by the proposed decoding scheme with respect to the case where the correlation knowledge is not fully utilized at the decoder.
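
    One concrete piece of such a scheme is how a correlation estimate turns the other link's soft decisions into side information. Assuming, for illustration, that the two sources are modeled as related through a binary symmetric channel whose crossover probability equals the estimated correlation parameter, the sketch below computes the corresponding LLR contribution that could be exchanged in the global iterations; the local SP/BCJR stages are not reproduced here.

    import numpy as np

    def correlation_side_llr(llr_other, p_flip):
        """Soft side information derived from the other source's decoder output.

        Passing the other decoder's belief through a BSC with crossover
        probability `p_flip` gives
            L_side = 2 * atanh( tanh(L_other / 2) * (1 - 2 * p_flip) ).
        """
        t = np.tanh(np.asarray(llr_other) / 2.0) * (1.0 - 2.0 * p_flip)
        return 2.0 * np.arctanh(np.clip(t, -0.999999, 0.999999))

    # Hypothetical LLRs from the other link's decoder and a 10% flip probability.
    llr_other = np.array([4.0, -2.5, 0.3, -6.0])
    print(correlation_side_llr(llr_other, p_flip=0.1))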